23 research outputs found

    Automated 3D model generation for urban environments [online]

    Abstract: In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for bird's-eye views. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to estimate, via scan matching, the approximate component of relative motion along the direction of travel of the acquisition vehicle; the resulting relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and airborne views, this initial path is globally corrected by Monte-Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and captures the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. To obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
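    The global path correction described above can be illustrated with a minimal Monte-Carlo Localization (particle filter) sketch. Everything here is a stand-in, not the thesis code: the toy grid map marks a "building edge" at x = 3, the sensor model is a simple in-map/off-map weighting, and the motion noise is arbitrary. It shows only the core loop of motion update, map-based weighting, and resampling.

```python
import random

# Toy global map: cells along x == 3 stand in for a mapped building edge.
GRID = {(x, y) for x in range(10) for y in range(10) if x == 3}

def likelihood(pose):
    # Hypothetical sensor model: high weight if the pose lies on a mapped cell.
    cell = (round(pose[0]), round(pose[1]))
    return 1.0 if cell in GRID else 0.1

def mcl_step(particles, odom):
    # Motion update: apply the odometry increment with Gaussian noise.
    moved = [(x + odom[0] + random.gauss(0, 0.1),
              y + odom[1] + random.gauss(0, 0.1)) for x, y in particles]
    # Measurement update: weight each particle by agreement with the map.
    weights = [likelihood(p) for p in moved]
    total = sum(weights)
    # Resample proportionally to weight.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(500)]
for _ in range(20):
    particles = mcl_step(particles, odom=(0.0, 0.0))
mean_x = sum(p[0] for p in particles) / len(particles)
print(round(mean_x, 2))  # particles concentrate near the mapped edge
```

    In the actual pipeline, the odometry increments would come from horizontal scan matching and the likelihood from comparing observed building features against the aerial photograph or DSM.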

    Enabling Social Virtual Reality Experiences Using Pass-Through Video

    Appropriately segmented portions of a video stream captured by an outward-facing camera mounted on a virtual reality (VR) device are inserted into a VR environment. The outward-facing camera can be the onboard camera of the VR device or a separate camera mounted on it. Because the camera view from the point of view of the VR user approximates a full three-dimensional (3D) model of the user's environment rendered from the user's location, the video stream can be inserted directly into the VR environment by segmenting out the relevant pixels and placing them into the VR environment as a 3D object. In this way, a high-quality VR experience, including desired aspects of the user's physical and social environment, can be provided in most settings without expensive 3D modeling or avatar generation.
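    The per-pixel compositing step behind this idea can be sketched as follows. This is an illustrative toy, not the described system: frames are tiny lists of ints standing in for real images, and the binary mask would in practice come from a segmentation model running on the camera stream.

```python
# Composite masked pass-through pixels over the rendered VR frame.
def composite(vr_frame, camera_frame, mask):
    """Where mask is 1, show the camera pixel; elsewhere keep the VR pixel."""
    return [
        [cam if m else vr for vr, cam, m in zip(vr_row, cam_row, mask_row)]
        for vr_row, cam_row, mask_row in zip(vr_frame, camera_frame, mask)
    ]

vr     = [[0, 0, 0], [0, 0, 0]]   # rendered VR background
camera = [[7, 7, 7], [7, 7, 7]]   # outward-facing camera frame
mask   = [[0, 1, 0], [0, 1, 1]]   # 1 = pixel belongs to the segmented person

print(composite(vr, camera, mask))
```

    In a real renderer the masked pixels would be textured onto a quad placed at the estimated depth of the segmented person, so they occlude and are occluded correctly as a 3D object.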

    Automated Reconstruction of Building Facades for Virtual Walk-thrus

    In this paper, we present a fast, automated approach to generating a highly detailed, textured 3D building facade model. The model is acquired at ground level by driving a vehicle equipped with laser scanners and a digital camera under normal traffic conditions on public roads, and then processed offline. We evaluate our approach on a large data set acquired in downtown Berkeley.

    Constructing 3D city models by merging ground-based and airborne views

    In this paper, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for bird's-eye views. A close-range facade model is acquired at ground level by driving a vehicle equipped with laser scanners and a digital camera under normal traffic conditions on public roads; a far-range Digital Surface Model (DSM), containing the complementary roof and terrain shape, is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. The facade models are first registered with respect to the DSM using Monte-Carlo Localization, and then merged with the DSM by removing redundant parts and filling gaps. The developed algorithms are evaluated on a data set acquired in downtown Berkeley.
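    The DSM triangulation step mentioned above can be sketched as a straightforward conversion of a regular height grid into a triangle mesh. The grid values and the two-triangles-per-cell split below are illustrative assumptions, not the authors' implementation.

```python
# Turn an airborne DSM (a regular grid of heights) into a triangle mesh.
def triangulate_dsm(heights):
    """Split each grid cell of a height map into two (x, y, z) triangles."""
    rows, cols = len(heights), len(heights[0])
    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            v00 = (x,     y,     heights[y][x])
            v10 = (x + 1, y,     heights[y][x + 1])
            v01 = (x,     y + 1, heights[y + 1][x])
            v11 = (x + 1, y + 1, heights[y + 1][x + 1])
            tris.append((v00, v10, v11))  # upper-right triangle of the cell
            tris.append((v00, v11, v01))  # lower-left triangle of the cell
    return tris

dsm = [[0.0, 1.0], [0.5, 2.0]]  # 2x2 height grid -> one cell -> two triangles
print(len(triangulate_dsm(dsm)))
```

    Each triangle would then be texture-mapped with aerial imagery by projecting its vertices into the image plane of the aerial photograph.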